
    Weak gravitational lensing of finite beams

    The standard theory of weak gravitational lensing relies on the infinitesimal light beam approximation. In this context, images are distorted by convergence and shear, whose respective sources unphysically depend on the resolution of the matter distribution---the so-called Ricci-Weyl problem. In this letter, we propose a strong-lensing-inspired formalism to describe the lensing of finite beams. We address the Ricci-Weyl problem by showing explicitly that convergence is caused by the matter enclosed by the beam, regardless of its distribution. Furthermore, shear turns out to be systematically enhanced by the finiteness of the beam. This implies, in particular, that the Kaiser-Squires relation between shear and convergence is violated, which could have profound consequences for the interpretation of weak lensing surveys. (Comment: 6 pages, 2 figures; v2 matches published version, some typos corrected.)
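For context (standard definitions, not taken from the abstract itself): in the infinitesimal-beam theory, convergence and shear both derive from a single lensing potential, which is what ties them together in the Kaiser-Squires relation mentioned above.

```latex
% In the infinitesimal-beam theory, convergence \kappa and shear
% (\gamma_1, \gamma_2) derive from a single lensing potential \psi:
\kappa = \tfrac{1}{2}\,(\partial_1^2 + \partial_2^2)\,\psi, \qquad
\gamma_1 = \tfrac{1}{2}\,(\partial_1^2 - \partial_2^2)\,\psi, \qquad
\gamma_2 = \partial_1 \partial_2\, \psi .
% Eliminating \psi in Fourier space yields the Kaiser-Squires relation
% (valid for \boldsymbol{\ell} \neq \mathbf{0}), with \gamma = \gamma_1 + i\gamma_2:
\hat{\gamma}(\boldsymbol{\ell})
  = \frac{(\ell_1 + i\,\ell_2)^2}{\ell_1^2 + \ell_2^2}\,
    \hat{\kappa}(\boldsymbol{\ell}) .
```

The abstract's claim is that for finite beams the enhanced shear breaks this one-to-one Fourier-space correspondence.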

    The theory of stochastic cosmological lensing

    On the scale of the light beams subtended by small sources, e.g. supernovae, matter cannot be accurately described as a fluid, which calls into question the applicability of standard cosmic lensing to those cases. In this article, we propose a new formalism to deal with small-scale lensing as a diffusion process: the Sachs and Jacobi equations governing the propagation of narrow light beams are treated as Langevin equations. We derive the associated Fokker-Planck-Kolmogorov equations, and use them to deduce general analytical results on the mean and dispersion of the angular distance. This formalism is applied to random Einstein-Straus Swiss-cheese models, allowing us to: (1) show an explicit example of the involved calculations; (2) check the validity of the method against both ray-tracing simulations and direct numerical integrations of the Langevin equation. As a byproduct, we obtain a post-Kantowski-Dyer-Roeder approximation, accounting for the effect of tidal distortions on the angular distance, in excellent agreement with numerical results. Besides, the dispersion of the angular distance is correctly reproduced in some regimes. (Comment: 37+13 pages, 8 figures; a few typos corrected, matches published version.)
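A minimal sketch of the numerical side of this programme, under invented stand-in dynamics (the paper's actual Sachs and Jacobi equations are matrix-valued and far richer): integrate a scalar Langevin equation with the Euler-Maruyama scheme and compare the ensemble statistics against the known stationary solution of the associated Fokker-Planck equation.

```python
import numpy as np

# Toy illustration, not the paper's system: treat a scalar quantity x,
# standing in for a beam property, as a Langevin process
#   dx = a(x) dt + b dW,
# integrate it with Euler-Maruyama, and compare ensemble statistics with
# the Fokker-Planck prediction, in the spirit of the paper's validation
# against direct numerical integrations of the Langevin equation.

rng = np.random.default_rng(0)

def euler_maruyama(a, b, x0, dt, n_steps, n_paths, rng):
    """Integrate dx = a(x) dt + b dW for n_paths independent realisations."""
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        x += a(x) * dt + b * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

# Ornstein-Uhlenbeck drift: relaxation towards 1 with additive noise.
a = lambda x: -(x - 1.0)
x_final = euler_maruyama(a, b=0.1, x0=0.0, dt=1e-3,
                         n_steps=5000, n_paths=2000, rng=rng)

# For this linear SDE the stationary mean is 1 and the stationary variance
# is b**2 / 2 = 0.005, which the ensemble statistics should approach.
print(x_final.mean(), x_final.var())
```

For the linear drift chosen here the Fokker-Planck prediction is available in closed form, which is what makes the comparison a meaningful consistency check.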

    Generalized Fokker-Planck equation for piecewise-diffusion processes with boundary hitting resets

    (13 pages, 2 figures.) This paper is concerned with the generalized Fokker-Planck equation for a class of stochastic hybrid processes in which diffusion and instantaneous jumps at the boundary are allowed. The state of the process after a jump is defined by a deterministic reset map. We establish a partial differential equation for the probability density function, which generalizes the usual Fokker-Planck equation for diffusion processes. The result involves a non-local boundary condition, which accounts for the jumping behaviour of the process, and an absorbing boundary condition on the non-characteristic part of the boundary. Two applications are given, with numerical results obtained by finite volume discretization.
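A hedged illustration of the class of processes considered (the interval, reset point, and reflecting behaviour at the lower boundary are invented for this sketch, not taken from the paper): a Brownian path on (0, 1) that is instantaneously restarted by a deterministic reset map whenever it hits the upper boundary. The histogram of simulated states approximates the density that the generalized Fokker-Planck equation, with its non-local boundary condition, describes.

```python
import numpy as np

# Piecewise-diffusion sketch: pure diffusion inside (0, 1), a deterministic
# reset map applied on hitting the boundary x = 1, and reflection at x = 0.
# All numerical values are illustrative.

rng = np.random.default_rng(1)

def simulate(n_steps, dt=1e-3, x0=0.5, reset_point=0.25, sigma=1.0):
    x = x0
    states = np.empty(n_steps)
    for i in range(n_steps):
        x += sigma * np.sqrt(dt) * rng.standard_normal()
        if x >= 1.0:          # boundary hit: apply the deterministic reset map
            x = reset_point
        elif x <= 0.0:        # reflecting behaviour at the lower boundary
            x = -x
        states[i] = x
    return states

states = simulate(200_000)
print(states.min(), states.max())  # all samples remain inside [0, 1)
```

A finite-volume discretization of the corresponding density equation, as in the paper's applications, could then be checked against `np.histogram(states, bins=50, density=True)`.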

    Relabeling and Summarizing Posterior Distributions in Signal Decomposition Problems when the Number of Components is Unknown

    This paper addresses the problems of relabeling and summarizing posterior distributions that typically arise, in a Bayesian framework, when dealing with signal decomposition problems with an unknown number of components. Such posterior distributions are defined over a union of subspaces of differing dimensionality and can be sampled from using modern Monte Carlo techniques, for instance the increasingly popular RJ-MCMC method. No generic approach is available, however, to summarize the resulting variable-dimensional samples and extract from them component-specific parameters. We propose a novel approach, named Variable-dimensional Approximate Posterior for Relabeling and Summarizing (VAPoRS), to this problem, which consists in approximating the posterior distribution of interest by a "simple"---but still variable-dimensional---parametric distribution. The distance between the two distributions is measured using the Kullback-Leibler divergence, and a Stochastic EM-type algorithm, driven by the RJ-MCMC sampler, is proposed to estimate the parameters. Two signal decomposition problems are considered, to show the capability of VAPoRS both for relabeling and for summarizing variable-dimensional posterior distributions: the classical problem of detecting and estimating sinusoids in white Gaussian noise on the one hand, and a particle counting problem motivated by the Pierre Auger project in astrophysics on the other.

    Summarizing Posterior Distributions in Signal Decomposition Problems when the Number of Components is Unknown

    This paper addresses the problem of summarizing the posterior distributions that typically arise, in a Bayesian framework, when dealing with signal decomposition problems with an unknown number of components. Such posterior distributions are defined over a union of subspaces of differing dimensionality and can be sampled from using modern Monte Carlo techniques, for instance the increasingly popular RJ-MCMC method. No generic approach is available, however, to summarize the resulting variable-dimensional samples and extract from them component-specific parameters. We propose a novel approach to this problem, which consists in approximating the complex posterior of interest by a "simple"---but still variable-dimensional---parametric distribution. The distance between the two distributions is measured using the Kullback-Leibler divergence, and a Stochastic EM-type algorithm, driven by the RJ-MCMC sampler, is proposed to estimate the parameters. The proposed algorithm is illustrated on the fundamental signal processing example of joint detection and estimation of sinusoids in white Gaussian noise.
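The KL-minimising approximation at the heart of the two papers above can be illustrated on a deliberately simple toy (the target distribution and the fixed-dimensional Gaussian family here are invented for the sketch, not the papers' variable-dimensional model): minimising KL(p || q) over a Gaussian family reduces to moment matching, which a stochastic, sampler-driven EM-type recursion can perform one draw at a time.

```python
import numpy as np

# Stand-in for MCMC output: samples from a bimodal mixture play the role
# of the "complex posterior" p; the approximating family q is a single
# Gaussian. Minimising KL(p || q) over (mu, var) is exactly moment
# matching, done here with decreasing Robbins-Monro steps so that each
# sample updates the parameters once, mimicking a sampler-driven
# stochastic EM-type algorithm.

rng = np.random.default_rng(2)

samples = np.concatenate([rng.normal(-1.0, 0.3, 5000),
                          rng.normal(2.0, 0.5, 5000)])
rng.shuffle(samples)

mu, var = 0.0, 1.0
for t, x in enumerate(samples, start=1):
    step = 1.0 / t                        # decreasing Robbins-Monro step size
    mu += step * (x - mu)                 # stochastic update of the mean
    var += step * ((x - mu) ** 2 - var)   # stochastic update of the variance

# The KL-optimal Gaussian matches the first two moments of the samples.
print(mu, var)
```

With step size 1/t, the mean recursion reproduces the running sample mean exactly; the variance recursion converges to the sample variance up to a vanishing bias.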

    On the joint Bayesian model selection and estimation of sinusoids via reversible jump MCMC in low SNR situations

    This paper addresses the behavior, in low SNR situations, of the algorithm proposed by Andrieu and Doucet (IEEE T. Signal Proces., 47(10), 1999) for the joint Bayesian model selection and estimation of sinusoids in Gaussian white noise. It is shown that the value of a certain hyperparameter, claimed to be weakly influential in the original paper, becomes in fact quite important in this context. This robustness issue is fixed by a suitable modification of the prior distribution, based on model selection considerations. Numerical experiments show that the resulting algorithm is more robust to the values of its hyperparameters.

    An empirical Bayes approach for joint Bayesian model selection and estimation of sinusoids via reversible jump MCMC

    This paper addresses the sensitivity of the algorithm proposed by Andrieu and Doucet (IEEE Trans. Signal Process., 47(10), 1999), for the joint Bayesian model selection and estimation of sinusoids in white Gaussian noise, to the value of a certain hyperparameter claimed to be weakly influential in the original paper. A deeper study of this issue reveals that the value of this hyperparameter (the scale parameter of the expected signal-to-noise ratio) has a significant influence on 1) the mixing rate of the Markov chain and 2) the posterior distribution of the number of components. As a possible workaround for this problem, we investigate an Empirical Bayes approach to select an appropriate value for this hyperparameter in a data-driven way. Marginal likelihood maximization is performed by means of an importance-sampling-based Monte Carlo EM (MCEM) algorithm. Numerical experiments illustrate that the sampler equipped with this MCEM procedure provides satisfactory performance in moderate to high SNR situations.
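A toy, hedged illustration of the Empirical Bayes idea above (the model and all numerical values are invented for this sketch, not the paper's sinusoid model): for y ~ N(theta, 1) with prior theta ~ N(0, delta), estimate the marginal likelihood p(y | delta) by importance sampling and pick the delta that maximises it over a grid, a crude stand-in for the importance-sampling-based MCEM procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def normal_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def marginal_likelihood(y, delta, n=100_000):
    # Importance sampling over the latent theta: draw from a fixed
    # proposal N(0, 4) and reweight by prior/proposal, so the weighted
    # average of the likelihood estimates p(y | delta).
    theta = rng.normal(0.0, 2.0, n)
    w = (normal_pdf(y, theta, 1.0)
         * normal_pdf(theta, 0.0, delta)
         / normal_pdf(theta, 0.0, 4.0))
    return w.mean()

# Crude "M-step": maximise the estimated marginal likelihood over a grid.
y = 2.0
grid = np.linspace(0.5, 6.0, 12)
estimates = [marginal_likelihood(y, d) for d in grid]
delta_hat = grid[int(np.argmax(estimates))]

# Closed form for this toy model: p(y | delta) = N(y; 0, delta + 1),
# maximised at delta = y**2 - 1 = 3, so delta_hat should land nearby
# up to Monte Carlo noise.
print(delta_hat)
```

The closed-form marginal makes this toy checkable; in the paper's setting the marginal is intractable, which is precisely why the MCEM machinery is needed.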

    Line-of-sight effects in strong gravitational lensing

    While most strong-gravitational-lensing systems may be roughly modelled by a single massive object between the source and the observer, in detail all the structures near the light path contribute to the observed images. These additional contributions, known as line-of-sight effects, are non-negligible in practice. This article proposes a new theoretical framework to model the line-of-sight effects, together with very promising applications at the interface of weak and strong lensing. Our approach relies on the dominant-lens approximation, where one deflector is treated as the main lens while the others are treated as perturbations. The resulting framework is technically simpler to handle than the multi-plane lensing formalism, while allowing one to consistently model any sub-critical perturbation. In particular, it is not limited to the usual external-convergence and external-shear parameterisation. As a first application, we identify a specific notion of line-of-sight shear that is not degenerate with the ellipticity of the main lens, and which could thus be extracted from strong-lensing images. This result supports and improves the recent proposal that Einstein rings might be powerful probes of cosmic shear. As a second application, we investigate the distortions of strong-lensing critical curves under line-of-sight effects, and more particularly their correlations across the sky. We find that such correlations may be used to probe not only the large-scale structure of the Universe but also the dark-matter halo profiles of strong lenses. This last possibility would be a key asset for improving the accuracy of the measurement of the Hubble-Lemaître constant via time-delay cosmography. (Comment: 39+14 pages, 15 figures. v2: discussion improved in sec. 3.2, figs. 4 and 15 updated; v3: minor typos corrected, matches published version; v4: other typos corrected.)
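For reference, the usual external-convergence and external-shear parameterisation that the abstract says its framework goes beyond is the standard tidal (quadratic) approximation of the line-of-sight contribution to the lensing potential (standard notation, not taken from the abstract):

```latex
% External convergence \kappa_{\rm ext} and external shear (\gamma_1, \gamma_2)
% model the line-of-sight perturbation as a quadratic potential:
\psi_{\rm LOS}(\boldsymbol{\theta})
  \simeq \frac{\kappa_{\rm ext}}{2}\,|\boldsymbol{\theta}|^2
  + \frac{\gamma_1}{2}\,\bigl(\theta_1^2 - \theta_2^2\bigr)
  + \gamma_2\,\theta_1\theta_2 ,
```

which distorts images only through the constant Jacobian of its deflection field; the dominant-lens framework summarised above accommodates perturbers beyond this lowest-order description.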